Cholesky-based Methods for Sparse Least Squares: The Benefits of Regularization∗

Author

  • Michael A. Saunders
Abstract

We study the use of black-box LDL factorizations for solving the augmented systems (KKT systems) associated with least-squares problems and barrier methods for linear programming (LP). With judicious regularization parameters, stability can be achieved for arbitrary data and arbitrary permutations of the KKT matrix. This offers improved efficiency compared to implementations based on “pure normal equations” or “pure KKT systems”. In particular, the LP matrix may be partitioned arbitrarily as (As Ad). If As Asᵀ is unusually sparse, the associated “reduced KKT system” may have very sparse Cholesky factors. Similarly for least-squares problems if a large number of rows of the observation matrix have special structure. Numerical behavior is illustrated on the villainous Netlib models greenbea and pilots.

1 Background

The connection between this work and Conjugate-Gradient methods lies in some properties of two CG algorithms, LSQR and CRAIG, for solving linear equations and least-squares problems of various forms. We consider the following problems:

    Linear equations:            Ax = b                                    (1)
    Minimum length:              min ‖x‖      subject to Ax = b            (2)
    Least squares:               min ‖Ax − b‖                              (3)
    Regularized least squares:   min ‖Ax − b‖² + ‖δx‖²                     (4)
    Regularized min length:      min ‖x‖² + ‖s‖²  subject to Ax + δs = b   (5)

where A is a general matrix (square or rectangular) and δ is a scalar (δ ≥ 0).

LSQR [17, 18] solves the first four problems, and incidentally the fifth, using essentially the same work and storage per iteration in all cases. The iterates xₖ reduce ‖b − Axₖ‖ monotonically. CRAIG [4, 17] solves only compatible systems (1)–(2), with ‖x − xₖ‖ decreasing monotonically. Since CRAIG is slightly simpler and more economical than LSQR, it may sometimes be preferred for those problems.

To extend CRAIG to incompatible systems, we have studied problem (5): a compatible system in the combined variables (x, s). If δ > 0, it is readily confirmed that problems (4) and (5) have the same solution x, and that both are solved by either the normal equations

    Nx = Aᵀb,    N ≡ AᵀA + δ²I,                                            (6)

or the augmented system …

∗ Partially supported by Department of Energy grant DE-FG03-92ER25117, National Science Foundation grant DMI-9204208, and Office of Naval Research grant N00014-90-J-1242.

Systems Optimization Laboratory, Dept of Operations Research, Stanford University, Stanford, California 94305-4022 ([email protected]).
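The equivalence of the regularized problem (4) and the normal equations (6) is easy to check numerically. The sketch below uses SciPy's `lsqr`, whose `damp` argument plays the role of δ; the random data and the value of δ are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))     # over-determined, incompatible system
b = rng.standard_normal(20)
delta = 0.1

# Problem (4): min ||Ax - b||^2 + ||delta*x||^2, via LSQR's damping parameter
x_lsqr = lsqr(A, b, damp=delta, atol=1e-12, btol=1e-12)[0]

# Normal equations (6): (A'A + delta^2 I) x = A'b
x_ne = np.linalg.solve(A.T @ A + delta**2 * np.eye(5), A.T @ b)

assert np.allclose(x_lsqr, x_ne, atol=1e-6)
```

The same `damp` mechanism realizes problem (5) as well, since (4) and (5) share the solution x when δ > 0.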


Related articles

Computational Issues for a New Class of Preconditioners

In this paper we consider solving a sequence of weighted linear least squares problems where the only changes from one problem to the next are the weights and the right-hand side (or data). We alternate between iterative and direct methods to solve the normal equations for the least squares problems. The direct method is the Cholesky factorization. For the iterative method we discuss a class of...
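A minimal sketch of the direct step described in this abstract: each problem in the sequence changes only the weights and the right-hand side, so the normal equations AᵀWA x = AᵀWb are re-factorized and solved by Cholesky. The dense data, weight range, and noise level are illustrative assumptions; the alternation with the iterative method is omitted.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 4))      # fixed model matrix across the sequence
x_true = rng.standard_normal(4)

for _ in range(3):                    # a sequence of weighted problems
    w = rng.uniform(0.5, 2.0, size=30)                # new weights W (diagonal)
    b = A @ x_true + 0.01 * rng.standard_normal(30)   # new right-hand side
    N = A.T @ (w[:, None] * A)        # normal-equations matrix A'WA
    c_and_lower = cho_factor(N)       # direct step: Cholesky factorization
    x = cho_solve(c_and_lower, A.T @ (w * b))
    assert np.linalg.norm(x - x_true) < 0.05
```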


Solution of Sparse Rectangular Systems Using LSQR and CRAIG

We examine two iterative methods for solving rectangular systems of linear equations: LSQR for over-determined systems Ax ≈ b, and Craig's method for under-determined systems Ax = b. By including regularization, we extend Craig's method to incompatible systems, and observe that it solves the same damped least-squares problems as LSQR. The methods may therefore be compared on rectangular systems...


Large-scale Inversion of Magnetic Data Using Golub-Kahan Bidiagonalization with Truncated Generalized Cross Validation for Regularization Parameter Estimation

In this paper a fast method for large-scale sparse inversion of magnetic data is considered. The L1-norm stabilizer is used to generate models with sharp and distinct interfaces. To deal with the non-linearity introduced by the L1-norm, a model-space iteratively reweighted least squares algorithm is used. The original model matrix is factorized using the Golub-Kahan bidiagonalization that proje...
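The model-space IRLS idea mentioned above can be sketched in a few lines: the L1 stabilizer |xᵢ| is approximated by the weighted quadratic xᵢ²/(|xᵢ| + ε), so each outer iteration solves a damped normal-equations system with updated weights. This dense sketch omits the Golub-Kahan projection and the cross-validation parameter choice; the data, λ, and ε are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 3.0, -2.0                  # sparse model with sharp features
b = A @ x_true + 0.01 * rng.standard_normal(40)

lam, eps = 0.1, 1e-6                              # stabilizer weight, safeguard
x = np.zeros(10)
for _ in range(30):
    # |x_i| ~ x_i^2 / (|x_i| + eps): the L1 term becomes a weighted L2 term,
    # so each iteration is an ordinary regularized least-squares solve
    W = np.diag(1.0 / (np.abs(x) + eps))
    x = np.linalg.solve(A.T @ A + lam * W, A.T @ b)
```

Entries not supported by the data acquire very large weights and are driven toward zero, which is what produces the sharp, distinct interfaces the abstract describes.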


A Multilevel Block Incomplete Cholesky Preconditioner for Solving Rectangular Sparse Matrices from Linear Least Squares Problems

An incomplete factorization method for preconditioning symmetric positive definite matrices is introduced to solve normal equations. The normal equations are formed as a means to solve rectangular matrices from linear least squares problems. The procedure is based on a block incomplete Cholesky factorization and a multilevel recursive strategy with an approximate Schur complement matrix formed ...
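To illustrate preconditioned CG on the normal equations in the simplest possible form, the sketch below substitutes a diagonal (Jacobi) preconditioner for the paper's block incomplete Cholesky with multilevel Schur complements; the sparse test matrix and the regularizing shift are assumptions made for the example.

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(2)
A = sparse_random(60, 25, density=0.2, random_state=rng, format="csr")
b = rng.standard_normal(60)

# SPD normal-equations matrix, with a shift to guarantee positive definiteness
N = (A.T @ A + identity(25)).tocsr()
rhs = A.T @ b

# Diagonal (Jacobi) preconditioner -- a crude stand-in for incomplete Cholesky
d = N.diagonal()
M = LinearOperator((25, 25), matvec=lambda r: r / d)

x, info = cg(N, rhs, M=M)
assert info == 0                      # CG converged
```

A genuine incomplete Cholesky factor L would be applied the same way, with `matvec` performing the two triangular solves L⁻ᵀ(L⁻¹r).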


Approximate Generalized Inverse Preconditioning Methods for Least Squares Problems

...iterative methods to solve least squares problems more efficiently. We especially focused on one kind of preconditioner, in which the preconditioners are approximate generalized inverses of the coefficient matrices of the least squares problems. We proposed two different approaches to constructing these approximate generalized inverses: one is based on the Minim...



Publication year: 1996